Exercise 6.1

  1. Grid approximation evaluates f(θ | y) on a finite, discrete grid of possible θ values, then draws a random sample of N θ values {θ(1), θ(2), …, θ(N)} from this discretized pdf to approximate the full posterior pdf f(θ | y). The algorithm proceeds in four steps:
  1. Define a discrete grid of possible θ values.

  2. Evaluate the prior pdf f(θ) and likelihood function L(θ | y) at each θ grid value.

  3. Obtain a discrete approximation of the posterior pdf f(θ | y): calculate the product f(θ)L(θ | y) at each θ grid value, then normalize the products so that they sum to 1 across all θ.

  4. Randomly sample N θ grid values with respect to their corresponding normalized posterior probabilities.

  1. I would change step 1 to specify a discrete grid of reasonable values of the parameter of interest, using a plot of the prior pdf and likelihood function to see which ranges of values are implausible (though possible). With this in mind, I would set up a discrete grid of λ values within a sensible range, essentially truncating the posterior’s negligible tails and concentrating the grid points where they improve the approximation.
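The proposed tweak can be sketched as follows (base R; the Gamma(20, 5) prior and observed counts y = (0, 1, 0) are assumed purely for illustration):

```r
# Step 1 with a truncated grid: a plot of prior and likelihood suggests
# lambda values outside roughly (1, 8) are implausible, so the grid is
# concentrated there rather than spread over a much wider range
lambda_grid <- seq(from = 1, to = 8, length = 101)
prior       <- dgamma(lambda_grid, 20, 5)
likelihood  <- dpois(0, lambda_grid) * dpois(1, lambda_grid) * dpois(0, lambda_grid)
# Normalize the products so they sum to 1 across the grid
posterior   <- prior * likelihood / sum(prior * likelihood)
# Sample N = 10000 grid values in proportion to their posterior probability
set.seed(6)
post_sample <- sample(lambda_grid, size = 10000, replace = TRUE, prob = posterior)
```

With the same number of grid points spent on a narrower range, the discretized pdf tracks the true Gamma(21, 8) posterior more closely.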

Exercise 6.2

knitr::include_graphics("image/chain_a.png")

knitr::include_graphics("image/chain_b.png")

knitr::include_graphics("image/chain_c.png")

knitr::include_graphics("image/chain_d.png")

Exercise 6.3

  1. A slow mixing chain with strong trends and a small effective sample size ratio indicates an unstable chain that has not yet found the range of posterior plausible π values. Its autocorrelation drops off very slowly across iterations, making the chain values less like an independent sample and generating greater error in the resulting posterior approximation. Such a chain also takes a long time to adequately explore the full range of the posterior, since it moves slowly around the range of posterior plausible values; within its iterations it may only roughly explore π values in a limited range. Thus its posterior approximation will overestimate the plausibility of π values in that range while underestimating the plausibility of values outside it.

  2. Strong correlation between pairs of Markov chain values means strong autocorrelation among the chain values, indicating that the dependence between values does not quickly fade across iterations. The chain values are therefore less like an independent sample, and the chain is mixing slowly. This leads to longer simulation times, less reliable estimates, erroneous approximations of the posterior (as in part a), and thus potentially misleading posterior conclusions.

  3. When the chain gets stuck while visiting a certain range of π values (appearing as completely flat lines in the trace plot), it over-samples those values, in this case in the corresponding tail of the posterior. This over-sampling produces the erroneous spikes in the posterior approximation.
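The slow-fading dependence described in part b can be illustrated with a small simulation (base R; the AR(1) process here is only a stand-in for actual slow-mixing MCMC output):

```r
# A slow-mixing "chain" simulated as an AR(1) process versus independent draws
set.seed(63)
n <- 5000
slow_chain <- numeric(n)
for (i in 2:n) {
  # Each value depends heavily on the previous one -> strong autocorrelation
  slow_chain[i] <- 0.95 * slow_chain[i - 1] + rnorm(1, sd = 0.1)
}
independent <- rnorm(n)
# Lag-1 correlation: high for the slow chain, near zero for independent draws
cor(slow_chain[-1], slow_chain[-n])
cor(independent[-1], independent[-n])
```

A lag-1 correlation near 1 is exactly the dependence that does not quickly fade away, while independent draws show essentially none.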

Exercise 6.4

  1. In practice, we run rstan simulations to approximate the posterior when the models we want to analyze are too complicated to specify mathematically. This means we won’t have the privilege of comparing our simulation results to the “real” posterior, which is why diagnostics are important: they help us assess our approximations so that we reach accurate posterior models and conclusions rather than skewed and erroneous ones.

  2. MCMC simulations provide a more flexible alternative to grid approximation when we work with models that have many parameters. Grid approximation suffers from the curse of dimensionality when simulating multivariate posteriors: it requires dividing the multidimensional sample space of θ = (θ1, θ2, …, θk) into an extremely fine grid to prevent big gaps in the approximation, which might not be feasible in terms of time and money. Like grid approximation samples, MCMC samples are not taken directly from the posterior pdf. Unlike grid approximation samples, however, MCMC samples are not independent – each subsequent sample value depends directly upon the previous value. Even so, a reasonable MCMC algorithm is mathematically guaranteed to mimic the posterior given a long enough chain; the trade-off is that these algorithms have a steeper learning curve than the grid approximation technique.

  3. RStan is one MCMC computing resource that can conduct MCMC simulation for us: it does double duty, identifying an appropriate MCMC algorithm for simulating the given model and then applying this algorithm to the data. The rstan package combines the power of R with the Stan engine, which can design and run an MCMC algorithm to produce an approximate sample from the posterior.

  4. What don’t you understand about the chapter? Why does the rstan syntax for the Normal-Normal model declare the data Y as a vector together with its length, rather than as a single value as in other models like the Beta-Binomial and Gamma-Poisson?

Can we produce trace plots for each of the chains separately?
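On the trace plot question: yes, each chain can be drawn in its own panel. A minimal base-R sketch (the simulated matrix stands in for real rstan output; for an actual stanfit object, `as.array(fit)` returns an iterations × chains × parameters array whose columns can be plotted the same way):

```r
# Simulated draws standing in for MCMC output: 500 iterations, 4 chains
set.seed(84)
draws <- matrix(rnorm(4 * 500), nrow = 500, ncol = 4)
# One trace plot per chain in a 2 x 2 panel layout
par(mfrow = c(2, 2))
for (chain in 1:4) {
  plot(draws[, chain], type = "l",
       xlab = "iteration", ylab = "value",
       main = paste("Chain", chain))
}
```

With real output, replace the simulated matrix with the draws for one parameter, e.g. `as.array(fit)[, , "mu"]` (where `"mu"` is a hypothetical parameter name).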

# Load packages
library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✓ ggplot2 3.3.5     ✓ purrr   0.3.4
## ✓ tibble  3.1.4     ✓ dplyr   1.0.7
## ✓ tidyr   1.1.3     ✓ stringr 1.4.0
## ✓ readr   2.0.1     ✓ forcats 0.5.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
library(janitor)
## 
## Attaching package: 'janitor'
## The following objects are masked from 'package:stats':
## 
##     chisq.test, fisher.test
library(rstan)
## Loading required package: StanHeaders
## rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
## For execution on a local, multicore CPU with excess RAM we recommend calling
## options(mc.cores = parallel::detectCores()).
## To avoid recompilation of unchanged Stan programs, we recommend calling
## rstan_options(auto_write = TRUE)
## 
## Attaching package: 'rstan'
## The following object is masked from 'package:tidyr':
## 
##     extract
library(bayesplot)
## This is bayesplot version 1.8.1
## - Online documentation and vignettes at mc-stan.org/bayesplot
## - bayesplot theme set to bayesplot::theme_default()
##    * Does _not_ affect other ggplot2 plots
##    * See ?bayesplot_theme_set for details on theme setting
library(bayesrules)
library(ggplot2)

Exercise 6.5

# Step 1: Define a grid of 5 pi values
grid_data_1 <- data.frame(pi_grid_1 = seq(from = 0, to = 1, length = 5))
# Step 2: Evaluate the prior & likelihood at each pi
grid_data_1 <- grid_data_1 %>% 
  mutate(prior_1 = dbeta(pi_grid_1, 3, 8), 
         likelihood_1 = dbinom(2, 10, pi_grid_1))
# Step 3: Approximate the posterior
grid_data_1 <- grid_data_1 %>% 
  mutate(unnormalized_1 = likelihood_1 * prior_1,
         posterior_1 = unnormalized_1 / sum(unnormalized_1))
# Confirm that the posterior approximation sums to 1
grid_data_1 %>%
  summarize(sum(unnormalized_1), sum(posterior_1))
##   sum(unnormalized_1) sum(posterior_1)
## 1           0.8765603                1
# Examine the grid approximated posterior
round(grid_data_1, 2)
##   pi_grid_1 prior_1 likelihood_1 unnormalized_1 posterior_1
## 1      0.00    0.00         0.00           0.00        0.00
## 2      0.25    3.00         0.28           0.85        0.96
## 3      0.50    0.70         0.04           0.03        0.04
## 4      0.75    0.01         0.00           0.00        0.00
## 5      1.00    0.00         0.00           0.00        0.00
# Plot the grid approximated posterior
ggplot(grid_data_1, aes(x = pi_grid_1, y = posterior_1)) +
  geom_point() +
  geom_segment(aes(x = pi_grid_1, xend = pi_grid_1, y = 0, yend = posterior_1)) 

set.seed(6)
# Step 4: sample from the discretized posterior
post_sample_1 <- sample_n(grid_data_1, size = 10000,
                          weight = posterior_1, replace = TRUE)
# A table of the 10000 sample values
post_sample_1 %>% 
  tabyl(pi_grid_1) %>% 
  adorn_totals("row")
##  pi_grid_1     n percent
##       0.25  9631  0.9631
##        0.5   369  0.0369
##      Total 10000  1.0000
# Histogram of the grid simulation with posterior pdf
ggplot(post_sample_1, aes(x = pi_grid_1)) + 
  geom_histogram(aes(y = ..density..), color = "white") + 
  stat_function(fun = dbeta, args = list(5, 16)) +
  lims(x = c(0, 1))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 2 rows containing missing values (geom_bar).

# Step 1: Define a grid of 201 pi values
grid_data_2 <- data.frame(pi_grid_2 = seq(from = 0, to = 1, length = 201))
# Step 2: Evaluate the prior & likelihood at each pi
grid_data_2 <- grid_data_2 %>% 
  mutate(prior_2 = dbeta(pi_grid_2, 3, 8), 
         likelihood_2 = dbinom(2, 10, pi_grid_2))
# Step 3: Approximate the posterior
grid_data_2 <- grid_data_2 %>% 
  mutate(unnormalized_2 = likelihood_2 * prior_2,
         posterior_2 = unnormalized_2 / sum(unnormalized_2))
# Confirm that the posterior approximation sums to 1
grid_data_2 %>%
  summarize(sum(unnormalized_2), sum(posterior_2))
##   sum(unnormalized_2) sum(posterior_2)
## 1            41.79567                1
# Examine the first rows of the grid approximated posterior
# (printing all 201 rows adds little: the posterior peaks near pi = 0.22
# and is essentially 0 beyond pi = 0.6)
round(head(grid_data_2, 10), 2)
##    pi_grid_2 prior_2 likelihood_2 unnormalized_2 posterior_2
## 1       0.00    0.00         0.00           0.00        0.00
## 2       0.00    0.01         0.00           0.00        0.00
## 3       0.01    0.03         0.00           0.00        0.00
## 4       0.01    0.07         0.01           0.00        0.00
## 5       0.02    0.13         0.02           0.00        0.00
## 6       0.03    0.19         0.02           0.00        0.00
## 7       0.03    0.26         0.03           0.01        0.00
## 8       0.04    0.34         0.04           0.01        0.00
## 9       0.04    0.43         0.05           0.02        0.00
## 10      0.04    0.53         0.06           0.03        0.00
# Plot the grid approximated posterior
ggplot(grid_data_2, aes(x = pi_grid_2, y = posterior_2)) +
  geom_point() +
  geom_segment(aes(x = pi_grid_2, xend = pi_grid_2, y = 0, yend = posterior_2)) 

set.seed(16)
# Step 4: sample from the discretized posterior
post_sample_2 <- sample_n(grid_data_2, size = 10000,
                          weight = posterior_2, replace = TRUE)
# A table of the 10000 sample values
post_sample_2 %>% 
  tabyl(pi_grid_2) %>% 
  adorn_totals("row")
##  pi_grid_2     n percent
##       0.02     1  0.0001
##       0.03     2  0.0002
##      0.035     2  0.0002
##       0.04     4  0.0004
##      0.045     6  0.0006
##       0.05    15  0.0015
##      0.055    18  0.0018
##       0.06    20  0.0020
##      0.065    22  0.0022
##       0.07    30  0.0030
##      0.075    36  0.0036
##       0.08    54  0.0054
##      0.085    55  0.0055
##       0.09    57  0.0057
##      0.095    56  0.0056
##        0.1    72  0.0072
##      0.105    78  0.0078
##       0.11   109  0.0109
##      0.115   109  0.0109
##       0.12    95  0.0095
##      0.125   146  0.0146
##       0.13   135  0.0135
##      0.135   150  0.0150
##       0.14   162  0.0162
##      0.145   155  0.0155
##       0.15   166  0.0166
##      0.155   177  0.0177
##       0.16   205  0.0205
##      0.165   186  0.0186
##       0.17   177  0.0177
##      0.175   209  0.0209
##       0.18   205  0.0205
##      0.185   209  0.0209
##       0.19   207  0.0207
##      0.195   201  0.0201
##        0.2   232  0.0232
##      0.205   230  0.0230
##       0.21   234  0.0234
##      0.215   219  0.0219
##       0.22   208  0.0208
##      0.225   217  0.0217
##       0.23   218  0.0218
##      0.235   215  0.0215
##       0.24   203  0.0203
##      0.245   215  0.0215
##       0.25   209  0.0209
##      0.255   212  0.0212
##       0.26   184  0.0184
##      0.265   174  0.0174
##       0.27   178  0.0178
##      0.275   165  0.0165
##       0.28   159  0.0159
##      0.285   173  0.0173
##       0.29   177  0.0177
##      0.295   153  0.0153
##        0.3   145  0.0145
##      0.305   144  0.0144
##       0.31   149  0.0149
##      0.315   149  0.0149
##       0.32   126  0.0126
##      0.325   124  0.0124
##       0.33   121  0.0121
##      0.335   125  0.0125
##       0.34   105  0.0105
##      0.345   113  0.0113
##       0.35    97  0.0097
##      0.355    82  0.0082
##       0.36    81  0.0081
##      0.365    63  0.0063
##       0.37    62  0.0062
##      0.375    74  0.0074
##       0.38    65  0.0065
##      0.385    55  0.0055
##       0.39    49  0.0049
##      0.395    62  0.0062
##        0.4    52  0.0052
##      0.405    36  0.0036
##       0.41    44  0.0044
##      0.415    41  0.0041
##       0.42    36  0.0036
##      0.425    35  0.0035
##       0.43    30  0.0030
##      0.435    19  0.0019
##       0.44    20  0.0020
##      0.445    19  0.0019
##       0.45    13  0.0013
##      0.455    17  0.0017
##       0.46    14  0.0014
##      0.465    15  0.0015
##       0.47     8  0.0008
##      0.475    12  0.0012
##       0.48     7  0.0007
##      0.485     6  0.0006
##       0.49     6  0.0006
##      0.495    10  0.0010
##        0.5     5  0.0005
##      0.505    10  0.0010
##       0.51     5  0.0005
##      0.515     5  0.0005
##       0.52     6  0.0006
##      0.525     4  0.0004
##       0.53     2  0.0002
##       0.54     3  0.0003
##      0.545     3  0.0003
##       0.55     2  0.0002
##      0.555     4  0.0004
##       0.56     1  0.0001
##       0.57     2  0.0002
##      0.575     1  0.0001
##       0.58     1  0.0001
##      0.585     1  0.0001
##      0.605     2  0.0002
##       0.61     2  0.0002
##      0.615     1  0.0001
##       0.62     1  0.0001
##       0.64     1  0.0001
##      0.665     1  0.0001
##      Total 10000  1.0000
# Histogram of the grid simulation with posterior pdf
ggplot(post_sample_2, aes(x = pi_grid_2)) + 
  geom_histogram(aes(y = ..density..), color = "white") + 
  stat_function(fun = dbeta, args = list(5, 16)) +
  lims(x = c(0, 1))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 2 rows containing missing values (geom_bar).

Exercise 6.6

# Step 1: Define a grid of 9 lambda values
grid_data_3 <- data.frame(lambda_grid = seq(from = 0, to = 8, length = 9))
# Step 2: Evaluate the prior & likelihood at each lambda
grid_data_3 <- grid_data_3 |> 
  mutate(prior = dgamma(lambda_grid, 20, 5),
         likelihood = dpois(0, lambda_grid) * dpois(1, lambda_grid) * dpois(0, lambda_grid))
# Step 3: Approximate the posterior
grid_data_3 <- grid_data_3 |> 
  mutate(unnormalized=likelihood*prior, 
         posterior = unnormalized/sum(unnormalized))
# Set the seed
set.seed(616)
# Step 4: sample from the discretized posterior
post_sample_3 <- sample_n(grid_data_3, size = 10000, weight = posterior, replace = TRUE)
# Histogram of the grid simulation with posterior pdf
ggplot(post_sample_3, aes(x = lambda_grid)) + 
  geom_histogram(aes(y = ..density..), color = "white") + 
  stat_function(fun = dgamma, args = list(21, 8)) + 
  lims(x = c(0, 8))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 2 rows containing missing values (geom_bar).

# Step 1: Define a grid of 201 lambda values
grid_data_4 <- data.frame(lambda_grid = seq(from = 0, to = 8, length = 201))
# Step 2: Evaluate the prior & likelihood at each lambda
grid_data_4 <- grid_data_4 |> 
  mutate(prior = dgamma(lambda_grid, 20, 5),
         likelihood = dpois(0, lambda_grid) * dpois(1, lambda_grid) * dpois(0, lambda_grid))
# Step 3: Approximate the posterior
grid_data_4 <- grid_data_4 |> 
  mutate(unnormalized=likelihood*prior, 
         posterior = unnormalized/sum(unnormalized))
# Set the seed
set.seed(616)
# Step 4: sample from the discretized posterior
post_sample_4 <- sample_n(grid_data_4, size = 10000, weight = posterior, replace = TRUE)
# Histogram of the grid simulation with posterior pdf
ggplot(post_sample_4, aes(x = lambda_grid)) + 
  geom_histogram(aes(y = ..density..), color = "white") + 
  stat_function(fun = dgamma, args = list(21, 8)) + 
  lims(x = c(0, 8))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 2 rows containing missing values (geom_bar).

Exercise 6.7

# Step 1: Define a grid of 11 mu values
grid_data_5  <- data.frame(mu_grid = seq(from = 5, to = 15, length = 11))
# Step 2: Evaluate the prior & likelihood at each mu
grid_data_5 <- grid_data_5 %>% 
  mutate(prior = dnorm(mu_grid, mean = 10, sd = 1.2),
         likelihood = dnorm(7.1, mean = mu_grid, sd = 1.3) *
           dnorm(8.9, mean = mu_grid, sd = 1.3) *
           dnorm(8.4, mean = mu_grid, sd = 1.3) *
           dnorm(8.6, mean = mu_grid, sd = 1.3))
# Step 3: Approximate the posterior
grid_data_5 <- grid_data_5 %>% 
  mutate(unnormalized = likelihood * prior, 
         posterior = unnormalized / sum(unnormalized))
# Plot the grid approximated posterior
ggplot(grid_data_5, aes(x = mu_grid, y = posterior)) + 
  geom_point() + 
  geom_segment(aes(x = mu_grid, xend = mu_grid, y = 0, yend = posterior))

# Set the seed
set.seed(66)
# Step 4: sample from the discretized posterior
post_sample_5 <- sample_n(grid_data_5, size = 10000, weight = posterior, replace = TRUE)
# Specify actual posterior using formula
posterior_mean_1 <- ((1.3)^2 * 10 + 4 * (1.2)^2 * 8.25) / (4 * (1.2)^2 + (1.3)^2)
posterior_variance_1 <- (1.3)^2 * (1.2)^2 / (4 * (1.2)^2 + (1.3)^2)
# Histogram of the grid simulation with posterior pdf
ggplot(post_sample_5, aes(x = mu_grid)) + 
  geom_histogram(aes(y = ..density..), color = "white") + 
  # dnorm() takes a standard deviation, not a variance
  stat_function(fun = dnorm, args = list(mean = posterior_mean_1, sd = sqrt(posterior_variance_1))) + 
  lims(x = c(5, 15))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 1 rows containing missing values (geom_bar).

# Step 1: Define a grid of 201 mu values
grid_data_6  <- data.frame(mu_grid = seq(from = 5, to = 15, length = 201))
# Step 2: Evaluate the prior & likelihood at each mu
grid_data_6 <- grid_data_6 %>% 
  mutate(prior = dnorm(mu_grid, mean = 10, sd = 1.2),
         likelihood = dnorm(7.1, mean = mu_grid, sd = 1.3) *
           dnorm(8.9, mean = mu_grid, sd = 1.3) *
           dnorm(8.4, mean = mu_grid, sd = 1.3) *
           dnorm(8.6, mean = mu_grid, sd = 1.3))
# Step 3: Approximate the posterior
grid_data_6 <- grid_data_6 %>% 
  mutate(unnormalized = likelihood * prior, 
         posterior = unnormalized / sum(unnormalized))
# Plot the grid approximated posterior
ggplot(grid_data_6, aes(x = mu_grid, y = posterior)) + 
  geom_point() + 
  geom_segment(aes(x = mu_grid, xend = mu_grid, y = 0, yend = posterior))

# Set the seed
set.seed(66)
# Step 4: sample from the discretized posterior
post_sample_6 <- sample_n(grid_data_6, size = 10000, weight = posterior, replace = TRUE)
# Specify actual posterior using bayesrules
summarize_normal_normal(mean = 10, sd = 1.2, sigma = 1.3, y_bar = 8.25, n = 4)
##       model     mean     mode       var        sd
## 1     prior 10.00000 10.00000 1.4400000 1.2000000
## 2 posterior  8.64698  8.64698 0.3266577 0.5715398
posterior_mean_2 <- 8.64698
posterior_variance_2 <- 0.3266577
# Histogram of the grid simulation with posterior pdf
ggplot(post_sample_6, aes(x = mu_grid)) + 
  geom_histogram(aes(y = ..density..), color = "white") + 
  # dnorm() takes a standard deviation, not a variance
  stat_function(fun = dnorm, args = list(mean = posterior_mean_2, sd = sqrt(posterior_variance_2))) + 
  lims(x = c(5, 15))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 1 rows containing missing values (geom_bar).
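As a sanity check, the summarize_normal_normal() output can be reproduced with the closed-form Normal-Normal updating formulas (base R, using the same prior N(10, 1.2²), σ = 1.3, n = 4, ȳ = 8.25):

```r
# Closed-form conjugate update for the Normal-Normal model
sigma <- 1.3; tau <- 1.2; mu <- 10; n <- 4; y_bar <- 8.25
post_mean <- (sigma^2 * mu + n * tau^2 * y_bar) / (n * tau^2 + sigma^2)
post_var  <- sigma^2 * tau^2 / (n * tau^2 + sigma^2)
# Matches the bayesrules output: mean 8.64698, variance 0.3266577
c(post_mean, post_var)
```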

Exercise 6.8

  1. Evaluating the U.S. national level of support for women’s abortion rights requires a model depending on dozens of parameters. Let θ = (θ1, θ2, …, θk) denote a generic set of k ≥ 1 parameters upon which the Bayesian model depends. In a Bayesian model of the “pro-choice” (on abortion) proportion, θ includes 51 parameters representing the “pro-choice” proportion in each state plus Washington, D.C., as well as multiple parameters defining the relationships between support for women’s abortion rights and state-level demographics (including religious beliefs), birth rates, feminist movements, etc. Our posterior analysis thus relies on building the posterior pdf of θ given a set of observed data y on state-level polls, demographics, birth rates, feminist movements, etc.

  2. Grid approximation suffers from the curse of dimensionality when simulating multivariate posteriors: it requires dividing the multidimensional sample space of θ = (θ1, θ2, …, θk) into an extremely fine grid to prevent large gaps in the approximation. In practice, this quickly becomes infeasible in terms of computation time and memory.

Each additional unknown parameter multiplies the number of candidate grid points: with m grid values per dimension, a k-parameter model requires evaluating m^k combinations, so the grid grows exponentially in k, degrading any search over parameter values. Extra dimensions also tend to introduce noise and redundancy, leading to greater approximation error. So while it is often desirable to account for more dimensions, algorithms become much harder to design in high dimensions and often come at the cost of computational efficiency.
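As a rough illustration (assuming a modest, hypothetical 100 grid values per parameter, a figure not taken from the exercise), the number of grid points that must be evaluated grows exponentially with the number of parameters:

```r
m <- 100                          # assumed grid values per parameter
k <- c(1, 2, 5, 10)               # number of model parameters
# Total grid points = m^k: exponential growth in the number of parameters
data.frame(parameters = k, grid_points = m^k)
```

A 1-parameter grid needs only 100 evaluations, while a 10-parameter grid needs 10^20 — far more than any computer can evaluate, let alone store.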

Exercise 6.9

  1. Neither MCMC nor grid approximation samples directly from the posterior pdf f(π | y). Each produces a sample that merely behaves like a random sample drawn from f(π | y) itself, thus providing an approximation that is nearly indistinguishable from the real yet intractable posterior. Also, in practice we typically cannot check the simulation results by comparing them to the “real” posterior.

  2. Given sufficiently many samples, both MCMC and grid approximation can mimic a posterior model that is otherwise impossible or prohibitively difficult to specify, and the extent to which they succeed can be diagnosed using visual and numerical measures.

  3. While grid approximation draws independent samples from a discretized approximation of the posterior pdf f(π | y) defined on a grid of π values, MCMC samples are not independent – each sample value depends directly upon the previous one. The former is more straightforward.

  4. Grid approximation breaks down in more complicated settings, i.e., models with many parameters. It suffers from the curse of dimensionality when simulating multivariate posteriors: it requires dividing the multidimensional sample space of θ = (θ1, θ2, …, θk) into an extremely fine grid to prevent large gaps in the approximation, which quickly becomes infeasible in terms of computation time and memory. MCMC offers a more flexible alternative. Though it produces a dependent sample that is not drawn from the posterior pdf f(θ | y), this sample will mimic the posterior model so long as the chain length N is big enough.
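The dependence in MCMC output shows up in its autocorrelation. As a hypothetical illustration (the Beta(13, 9) target and the hand-rolled Metropolis sampler below are not part of this exercise), compare independent draws with a random-walk Metropolis chain targeting the same pdf:

```r
set.seed(616)
target <- function(p) dbeta(p, 13, 9)  # hypothetical Beta(13, 9) posterior

# Grid-style sample: independent draws from the target
indep <- rbeta(5000, 13, 9)

# MCMC-style sample: each value depends directly on the previous one
mcmc <- numeric(5000)
mcmc[1] <- 0.5
for (i in 2:5000) {
  proposal <- mcmc[i - 1] + rnorm(1, sd = 0.05)
  accept_prob <- ifelse(proposal > 0 & proposal < 1,
                        min(1, target(proposal) / target(mcmc[i - 1])), 0)
  mcmc[i] <- ifelse(runif(1) < accept_prob, proposal, mcmc[i - 1])
}

# Lag-1 autocorrelation: near 0 for the independent sample,
# far from 0 for the Markov chain
acf(indep, plot = FALSE)$acf[2]
acf(mcmc, plot = FALSE)$acf[2]
```

The strong lag-1 autocorrelation in the chain is why an MCMC sample of size N carries less information than N independent draws.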

Exercise 6.10

  1. This is a Markov chain: one’s decision about whether to go to the Thai restaurant on day i+1 (for any positive integer i) depends only on whether one went on day i. One decides based on the most recent experience – if the last visit was great, one is more likely to return regardless of earlier visits; if it was bad, one is less likely to return. One’s mind is updated with the latest experience, so if we know θ(i), then θ(i−1) is of no consequence to θ(i+1) – the only information we need to simulate θ(i+1) is the value of θ(i). For instance, suppose that if one did not go to the Thai restaurant on day i, there is a 60% chance of going on day i+1 and a 40% chance of not going; if one did go on day i, there is a 70% chance of going on day i+1 after a great experience (and a 30% chance of not going), versus a 30% chance of going after a bad experience (and a 70% chance of not going).

  2. This is not a Markov chain: whether one wins the lottery on day i+1 does not depend on whether one won on day i (for any positive integer i). At least as it is advertised, the lottery is random, so wins on different days are independent of each other – earlier outcomes have no influence on the later probability of winning.

  3. This is not a Markov chain: whether one wins game i+1 against one’s roommate is not determined by the outcome of game i alone (for any positive integer i). We cannot tell whether a win in game i reflects better chess skills or mere luck (the roommate was sleepy, got distracted, or lost out of courtesy, etc.), nor what a loss in game i indicates. So the outcome of game i alone does not give sufficient information about game i+1. Even if θ(i+1) depends to some extent on θ(i), we need to take all the previous θ values into account because together they indicate one’s chess skills as well as playing form. In this case we do want to know {θ(1), θ(2), …, θ(i)} to get a picture of one’s ability and condition and to predict θ(i+1); they are all of consequence to the value of θ(i+1).
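The hypothetical Thai-restaurant transition probabilities from part 1 can be simulated directly. This sketch additionally assumes a 50/50 chance that any given visit is a great experience (an assumption not stated above):

```r
set.seed(616)
n_days <- 10000
go <- logical(n_days)
go[1] <- TRUE
for (i in 2:n_days) {
  if (go[i - 1]) {
    # Whether yesterday's visit was great (assumed 50/50) sets today's odds
    great <- runif(1) < 0.5
    go[i] <- runif(1) < ifelse(great, 0.7, 0.3)
  } else {
    go[i] <- runif(1) < 0.6   # 60% chance of going after a day away
  }
}
# Long-run fraction of days spent at the Thai restaurant
mean(go)
```

Note that each iteration consults only the previous day’s outcome, never the full history {θ(1), …, θ(i)} – the Markov property in action. By contrast, the lottery and chess examples could not be simulated this way.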

Exercise 6.11

# DEFINE the model
bb_model_a <- "
  data {
    int<lower = 0, upper = 20> Y;
  }
  parameters {
    real<lower = 0, upper = 1> pi;
  }
  model {
    Y ~ binomial(20, pi);
    pi ~ beta(1, 1);
  }
"
# DEFINE the model
gp_model_b <- "
  data {
    int<lower = 0> Y[3];
  }
  parameters {
    real<lower = 0> lambda;
  }
  model {
    Y ~ poisson(lambda);
    lambda ~ gamma(4, 2);
  }
"
# DEFINE the model
nn_model_c <- "
  data {
    vector[4] Y;    // y ∈ (−∞, ∞)
  }
  parameters {
    real mu;        // μ ∈ (−∞, ∞)
  }
  model {
    Y ~ normal(mu, 1);
    mu ~ normal(0, 10);
  }
"

Exercise 6.12

# DEFINE the model
bb_model <- "
  data {
    int<lower = 0, upper = 20> Y;
  }
  parameters {
    real<lower = 0, upper = 1> pi;
  }
  model {
    Y ~ binomial(20, pi);
    pi ~ beta(1, 1);
  }
"
# SIMULATE the posterior
bb_sim <- stan(model_code = bb_model, data = list(Y = 12), chains = 4, iter = 5000*2, seed = 616)
## SAMPLING FOR MODEL '6ad93c0f778586eebb70dcce4b36ebce' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.6e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.16 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.033757 seconds (Warm-up)
## Chain 1:                0.038317 seconds (Sampling)
## Chain 1:                0.072074 seconds (Total)
## Chain 1: 
# DEFINE the model
gp_model <- "
  data {
    int<lower = 0> Y[2];
  }
  parameters {
    real<lower = 0> lambda;
  }
  model {
    Y ~ poisson(lambda);
    lambda ~ gamma(4, 2);
  }
"
# SIMULATE the posterior
gp_sim <- stan(model_code = gp_model, data = list(Y = c(1,2)), 
               chains = 4, iter = 5000*2, seed = 6160)
## SAMPLING FOR MODEL '149017055105e291f5c1c2f5155e7c11' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.032084 seconds (Warm-up)
## Chain 1:                0.034446 seconds (Sampling)
## Chain 1:                0.06653 seconds (Total)
## Chain 1: 
# DEFINE the model
nn_model <- "
  data {
    vector[4] Y;    // y ∈ (−∞, ∞)
  }
  parameters {
    real mu;        // μ ∈ (−∞, ∞)
  }
  model {
    Y ~ normal(mu, 1);
    mu ~ normal(0, 10);
  }
"
# SIMULATE the posterior
nn_sim <- stan(model_code = nn_model, data = list(Y = c(3.1, 1.1, 5.4, 2.6)), 
               chains = 4, iter = 5000*2, seed = 616)
## SAMPLING FOR MODEL '1fcb0229ad1a5aaff22aeb1cda675f29' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.2e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.032826 seconds (Warm-up)
## Chain 1:                0.033018 seconds (Sampling)
## Chain 1:                0.065844 seconds (Total)
## Chain 1: 

Exercise 6.13

bb_model_1 <- "
  data {
    int<lower = 0, upper = 10> Y;
  }
  parameters {
    real<lower = 0, upper = 1> pi;
  }
  model {
    Y ~ binomial(10, pi);
    pi ~ beta(3, 8);
  }
"

bb_sim_1 <- stan(model_code = bb_model_1, data = list(Y = 2), 
                 chains = 3, iter = 12000, seed = 616)
## SAMPLING FOR MODEL '4d6640c8704630a849adeb9603a58bbd' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.3e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:     1 / 12000 [  0%]  (Warmup)
## Chain 1: Iteration:  1200 / 12000 [ 10%]  (Warmup)
## Chain 1: Iteration:  2400 / 12000 [ 20%]  (Warmup)
## Chain 1: Iteration:  3600 / 12000 [ 30%]  (Warmup)
## Chain 1: Iteration:  4800 / 12000 [ 40%]  (Warmup)
## Chain 1: Iteration:  6000 / 12000 [ 50%]  (Warmup)
## Chain 1: Iteration:  6001 / 12000 [ 50%]  (Sampling)
## Chain 1: Iteration:  7200 / 12000 [ 60%]  (Sampling)
## Chain 1: Iteration:  8400 / 12000 [ 70%]  (Sampling)
## Chain 1: Iteration:  9600 / 12000 [ 80%]  (Sampling)
## Chain 1: Iteration: 10800 / 12000 [ 90%]  (Sampling)
## Chain 1: Iteration: 12000 / 12000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.040208 seconds (Warm-up)
## Chain 1:                0.041641 seconds (Sampling)
## Chain 1:                0.081849 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL '4d6640c8704630a849adeb9603a58bbd' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 7e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:     1 / 12000 [  0%]  (Warmup)
## Chain 2: Iteration:  1200 / 12000 [ 10%]  (Warmup)
## Chain 2: Iteration:  2400 / 12000 [ 20%]  (Warmup)
## Chain 2: Iteration:  3600 / 12000 [ 30%]  (Warmup)
## Chain 2: Iteration:  4800 / 12000 [ 40%]  (Warmup)
## Chain 2: Iteration:  6000 / 12000 [ 50%]  (Warmup)
## Chain 2: Iteration:  6001 / 12000 [ 50%]  (Sampling)
## Chain 2: Iteration:  7200 / 12000 [ 60%]  (Sampling)
## Chain 2: Iteration:  8400 / 12000 [ 70%]  (Sampling)
## Chain 2: Iteration:  9600 / 12000 [ 80%]  (Sampling)
## Chain 2: Iteration: 10800 / 12000 [ 90%]  (Sampling)
## Chain 2: Iteration: 12000 / 12000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.039749 seconds (Warm-up)
## Chain 2:                0.039544 seconds (Sampling)
## Chain 2:                0.079293 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL '4d6640c8704630a849adeb9603a58bbd' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 5e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:     1 / 12000 [  0%]  (Warmup)
## Chain 3: Iteration:  1200 / 12000 [ 10%]  (Warmup)
## Chain 3: Iteration:  2400 / 12000 [ 20%]  (Warmup)
## Chain 3: Iteration:  3600 / 12000 [ 30%]  (Warmup)
## Chain 3: Iteration:  4800 / 12000 [ 40%]  (Warmup)
## Chain 3: Iteration:  6000 / 12000 [ 50%]  (Warmup)
## Chain 3: Iteration:  6001 / 12000 [ 50%]  (Sampling)
## Chain 3: Iteration:  7200 / 12000 [ 60%]  (Sampling)
## Chain 3: Iteration:  8400 / 12000 [ 70%]  (Sampling)
## Chain 3: Iteration:  9600 / 12000 [ 80%]  (Sampling)
## Chain 3: Iteration: 10800 / 12000 [ 90%]  (Sampling)
## Chain 3: Iteration: 12000 / 12000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.041569 seconds (Warm-up)
## Chain 3:                0.039118 seconds (Sampling)
## Chain 3:                0.080687 seconds (Total)
## Chain 3:
mcmc_trace(bb_sim_1, pars = "pi", size = 0.1)

(c) The trace plot x-axis spans the iterations that are kept in the simulation sample for each of the three chains. Its maximum is 6000: although each chain runs for 12000 total iterations, the first 6000 iterations of every chain are discarded as burn-in. (Without direct knowledge of the posterior it is trying to simulate, a Markov chain might start out sampling unreasonable values of π; keeping the values produced during this learning period could lead to a poor posterior approximation.) We therefore end up with three separate Markov chain samples of size 6000 each.
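The bookkeeping behind that x-axis range can be sketched directly (a small illustrative calculation, not part of the Stan output):

```r
# Each chain runs 12000 iterations; rstan discards the first half as warmup.
iter   <- 12000                            # total iterations per chain
chains <- 3
kept_per_chain <- iter / 2                 # 6000 draws survive burn-in per chain
total_kept     <- chains * kept_per_chain  # 18000 retained draws overall
kept_per_chain                             # matches the trace plot's x-axis maximum
```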

mcmc_dens_overlay(bb_sim_1, pars = "pi") + 
  ylab("density")

  1. Since the prior model of π is Beta(α, β) = Beta(3, 8) and we observe Y = 2 under the Bin(n = 10, π) model, the posterior follows as Beta(α + y, β + n − y). Thus the posterior model is Beta(5, 16).
plot_beta_binomial(3,8,2,10)

Comparing the two plots above, the MCMC approximation closely matches the exact posterior: both indicate that π values around 0.22 are the most likely, with plausible π values ranging from 0 to approximately 0.625.
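The conjugate arithmetic can be double-checked in a few lines of base R (a quick sketch that restates the Beta(5, 16) result above, nothing new):

```r
# Beta-Binomial update: prior Beta(3, 8), data y = 2 successes in n = 10 trials
alpha_post <- 3 + 2        # alpha + y = 5
beta_post  <- 8 + 10 - 2   # beta + n - y = 16
post_mean <- alpha_post / (alpha_post + beta_post)           # 5/21, about 0.238
post_mode <- (alpha_post - 1) / (alpha_post + beta_post - 2) # 4/19, about 0.211
```

Both summaries sit near the 0.22 peak visible in the MCMC density plot.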

Exercise 6.14

bb_model_2 <- "
data {
  int<lower = 0, upper = 12> Y;
}
parameters {
  real<lower = 0, upper = 1> pi;
}
model {
  Y ~ binomial(12, pi);
  pi ~ beta(4, 3);
}
"

bb_sim_2 <- stan(model_code = bb_model_2, data = list(Y = 4), chains = 3, iter = 12000, seed = 616)
## 
## SAMPLING FOR MODEL '1593af74ab882209041b8f49c8a160b2' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.7e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.17 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:     1 / 12000 [  0%]  (Warmup)
## Chain 1: Iteration:  1200 / 12000 [ 10%]  (Warmup)
## Chain 1: Iteration:  2400 / 12000 [ 20%]  (Warmup)
## Chain 1: Iteration:  3600 / 12000 [ 30%]  (Warmup)
## Chain 1: Iteration:  4800 / 12000 [ 40%]  (Warmup)
## Chain 1: Iteration:  6000 / 12000 [ 50%]  (Warmup)
## Chain 1: Iteration:  6001 / 12000 [ 50%]  (Sampling)
## Chain 1: Iteration:  7200 / 12000 [ 60%]  (Sampling)
## Chain 1: Iteration:  8400 / 12000 [ 70%]  (Sampling)
## Chain 1: Iteration:  9600 / 12000 [ 80%]  (Sampling)
## Chain 1: Iteration: 10800 / 12000 [ 90%]  (Sampling)
## Chain 1: Iteration: 12000 / 12000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.045919 seconds (Warm-up)
## Chain 1:                0.045808 seconds (Sampling)
## Chain 1:                0.091727 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL '1593af74ab882209041b8f49c8a160b2' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 7e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:     1 / 12000 [  0%]  (Warmup)
## Chain 2: Iteration:  1200 / 12000 [ 10%]  (Warmup)
## Chain 2: Iteration:  2400 / 12000 [ 20%]  (Warmup)
## Chain 2: Iteration:  3600 / 12000 [ 30%]  (Warmup)
## Chain 2: Iteration:  4800 / 12000 [ 40%]  (Warmup)
## Chain 2: Iteration:  6000 / 12000 [ 50%]  (Warmup)
## Chain 2: Iteration:  6001 / 12000 [ 50%]  (Sampling)
## Chain 2: Iteration:  7200 / 12000 [ 60%]  (Sampling)
## Chain 2: Iteration:  8400 / 12000 [ 70%]  (Sampling)
## Chain 2: Iteration:  9600 / 12000 [ 80%]  (Sampling)
## Chain 2: Iteration: 10800 / 12000 [ 90%]  (Sampling)
## Chain 2: Iteration: 12000 / 12000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.039179 seconds (Warm-up)
## Chain 2:                0.040638 seconds (Sampling)
## Chain 2:                0.079817 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL '1593af74ab882209041b8f49c8a160b2' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 7e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:     1 / 12000 [  0%]  (Warmup)
## Chain 3: Iteration:  1200 / 12000 [ 10%]  (Warmup)
## Chain 3: Iteration:  2400 / 12000 [ 20%]  (Warmup)
## Chain 3: Iteration:  3600 / 12000 [ 30%]  (Warmup)
## Chain 3: Iteration:  4800 / 12000 [ 40%]  (Warmup)
## Chain 3: Iteration:  6000 / 12000 [ 50%]  (Warmup)
## Chain 3: Iteration:  6001 / 12000 [ 50%]  (Sampling)
## Chain 3: Iteration:  7200 / 12000 [ 60%]  (Sampling)
## Chain 3: Iteration:  8400 / 12000 [ 70%]  (Sampling)
## Chain 3: Iteration:  9600 / 12000 [ 80%]  (Sampling)
## Chain 3: Iteration: 10800 / 12000 [ 90%]  (Sampling)
## Chain 3: Iteration: 12000 / 12000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.038951 seconds (Warm-up)
## Chain 3:                0.046738 seconds (Sampling)
## Chain 3:                0.085689 seconds (Total)
## Chain 3:
mcmc_trace(bb_sim_2, pars = "pi", size = 0.1)

  1. The trace plot x-axis spans the iterations that are kept in the simulation sample for each of the three chains. Its maximum is 6000: although each chain runs for 12000 total iterations, the first 6000 iterations of every chain are discarded as burn-in. (Without direct knowledge of the posterior it is trying to simulate, a Markov chain might start out sampling unreasonable values of π; keeping the values produced during this learning period could lead to a poor posterior approximation.) We therefore end up with three separate Markov chain samples of size 6000 each.

mcmc_dens_overlay(bb_sim_2, pars = "pi") + 
  ylab("density")

  1. Since the prior model of π is Beta(α, β) = Beta(4, 3) and we observe Y = 4 under the Bin(n = 12, π) model, the posterior follows as Beta(α + y, β + n − y). Thus the posterior model is Beta(8, 11).
plot_beta_binomial(4,3,4,12)

Comparing the two plots above, the MCMC approximation closely matches the exact posterior: both indicate that π values around 0.4 are the most likely, with plausible π values ranging from 0 to approximately 0.8.
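As a sketch, the Beta(8, 11) posterior can also be summarized directly with `qbeta` (standard base R; nothing here depends on the MCMC fit):

```r
# Posterior Beta(8, 11): mode and a central 95% interval for pi
post_mode <- (8 - 1) / (8 + 11 - 2)                      # 7/17, about 0.41
ci_95 <- qbeta(c(0.025, 0.975), shape1 = 8, shape2 = 11) # roughly (0.2, 0.65)
```

The mode lands on the peak near 0.4, and the interval covers the bulk of the plausible range seen in both plots.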

Exercise 6.15

gp_model_1 <- " 
data {
  int<lower = 0> Y[3];
}
parameters {
  real<lower = 0> lambda;
}
model {
  Y ~ poisson(lambda);
  lambda ~ gamma(20, 5); 
}
"

gp_sim_1 <- stan(model_code = gp_model_1, data = list(Y = c(0,1,0)), 
               chains = 4, iter = 10000, seed = 616)
## 
## SAMPLING FOR MODEL '388d685cf358af3dac9f743f72371708' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.4e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.14 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.034064 seconds (Warm-up)
## Chain 1:                0.034074 seconds (Sampling)
## Chain 1:                0.068138 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL '388d685cf358af3dac9f743f72371708' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 9e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 2: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 2: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 2: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 2: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 2: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 2: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 2: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 2: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 2: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 2: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 2: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.034533 seconds (Warm-up)
## Chain 2:                0.033901 seconds (Sampling)
## Chain 2:                0.068434 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL '388d685cf358af3dac9f743f72371708' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 5e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 3: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 3: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 3: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 3: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 3: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 3: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 3: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 3: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 3: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 3: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 3: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.035472 seconds (Warm-up)
## Chain 3:                0.030281 seconds (Sampling)
## Chain 3:                0.065753 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL '388d685cf358af3dac9f743f72371708' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 4e-06 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 4: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 4: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 4: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 4: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 4: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 4: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 4: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 4: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 4: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 4: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 4: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.034742 seconds (Warm-up)
## Chain 4:                0.032127 seconds (Sampling)
## Chain 4:                0.066869 seconds (Total)
## Chain 4:
mcmc_trace(gp_sim_1, pars = "lambda", size = 0.1)

mcmc_dens_overlay(gp_sim_1, pars = "lambda") + 
  yaxis_text(TRUE) +
  ylab("density")

(c) From the density plots, 2.5 appears to be the most posterior plausible value of λ.

  1. The Gamma-Poisson Bayesian model complements the Poisson structure of the data Y with a Gamma prior on λ. Since the prior model of λ is Gamma(s, r) = Gamma(20, 5) and we observe the n = 3 independent data points (Y1, Y2, Y3) = (0, 1, 0), the posterior follows as Gamma(s + ∑yᵢ, r + n). Thus the posterior model is Gamma(21, 8).
plot_gamma_poisson(20,5,1,3)

Comparing the two plots above, the MCMC approximation closely matches the exact posterior: both indicate that λ values around 2.5 are the most likely, with plausible λ values ranging from 0 to approximately 5.
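The Gamma(21, 8) result can be double-checked with a couple of lines of base R (a sketch restating the conjugate update above):

```r
# Gamma-Poisson update: prior Gamma(20, 5), data (0, 1, 0), n = 3
s_post <- 20 + sum(c(0, 1, 0))        # shape: s + sum(y) = 21
r_post <- 5 + 3                       # rate:  r + n = 8
post_mean <- s_post / r_post          # 21/8 = 2.625
post_mode <- (s_post - 1) / r_post    # 20/8 = 2.5, the peak in the density plots
```

Note the posterior mode is exactly 2.5, matching the most plausible λ value read off the plots.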

Exercise 6.16

gp_model_2 <- " 
data {
  int<lower = 0> Y[3];
}
parameters {
  real<lower = 0> lambda;
}
model {
  Y ~ poisson(lambda);
  lambda ~ gamma(5, 5); 
}
"

gp_sim_2 <- stan(model_code = gp_model_2, data = list(Y = c(0,1,0)), 
               chains = 4, iter = 10000, seed = 616)
## 
## SAMPLING FOR MODEL '919c81e9b32bed5c97be5be1acd4ed1b' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.03791 seconds (Warm-up)
## Chain 1:                0.031459 seconds (Sampling)
## Chain 1:                0.069369 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL '919c81e9b32bed5c97be5be1acd4ed1b' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 6e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.06 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 2: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 2: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 2: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 2: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 2: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 2: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 2: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 2: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 2: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 2: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 2: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.04054 seconds (Warm-up)
## Chain 2:                0.031345 seconds (Sampling)
## Chain 2:                0.071885 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL '919c81e9b32bed5c97be5be1acd4ed1b' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 5e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 3: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 3: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 3: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 3: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 3: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 3: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 3: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 3: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 3: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 3: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 3: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.038349 seconds (Warm-up)
## Chain 3:                0.03136 seconds (Sampling)
## Chain 3:                0.069709 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL '919c81e9b32bed5c97be5be1acd4ed1b' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 5e-06 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 4: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 4: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 4: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 4: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 4: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 4: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 4: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 4: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 4: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 4: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 4: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.039044 seconds (Warm-up)
## Chain 4:                0.029963 seconds (Sampling)
## Chain 4:                0.069007 seconds (Total)
## Chain 4:
mcmc_trace(gp_sim_2, pars = "lambda", size = 0.1)

mcmc_dens_overlay(gp_sim_2, pars = "lambda") + 
  yaxis_text(TRUE) +
  ylab("density")

(c) From the density plots, 0.65 appears to be the most posterior plausible value of λ.

  1. The Gamma-Poisson Bayesian model complements the Poisson structure of the data Y with a Gamma prior on λ. Since the prior model of λ is Gamma(s, r) = Gamma(5, 5) and we observe the n = 3 independent data points (Y1, Y2, Y3) = (0, 1, 0), the posterior follows as Gamma(s + ∑yᵢ, r + n). Thus the posterior model is Gamma(6, 8).
plot_gamma_poisson(5,5,1,3)

Comparing the two plots above, the MCMC approximation closely matches the exact posterior: both indicate that λ values around 0.65 are the most likely, with plausible λ values ranging from 0 to approximately 2.5.

Exercise 6.17

nn_model_1 <- "
data {
    vector[4] Y;
} 
parameters {
    real mu;
} 
model {
   Y ~ normal(mu, 1.3);
   mu ~ normal(10, 1.2);
}
"
nn_sim_1 <- stan(model_code = nn_model_1, data = list(Y = c(7.1,8.9,8.4,8.6)), 
               chains = 4, iter = 10000, seed = 606)
## 
## SAMPLING FOR MODEL 'ec4ba0e526f6896c9fa6debf52caa01b' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 2e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.034994 seconds (Warm-up)
## Chain 1:                0.033059 seconds (Sampling)
## Chain 1:                0.068053 seconds (Total)
## Chain 1: 
## (Chains 2–4 produced analogous sampling logs, each completing 10000 iterations in roughly 0.07 seconds total.)
mcmc_trace(nn_sim_1, pars = "mu", size = 0.1)

mcmc_dens_overlay(nn_sim_1, pars = "mu") + 
  yaxis_text(TRUE) +
  ylab("density")

(c) From the density plots, a value of around 8.65 appears to be the most plausible posterior value of μ.

  1. The Normal-Normal Bayesian model complements the Normal data structure with a Normal prior on μ. Since the prior model is μ ∼ N(10, 1.2²) with Y(i) | μ ∼ N(μ, 1.3²) and we have n = 4 independent observations (Y1, Y2, Y3, Y4) = (7.1, 8.9, 8.4, 8.6), the posterior model is N(8.64698, 0.5715398²).
summarize_normal_normal(mean = 10, sd = 1.2, sigma = 1.3, y_bar = 8.25, n = 4)
##       model     mean     mode       var        sd
## 1     prior 10.00000 10.00000 1.4400000 1.2000000
## 2 posterior  8.64698  8.64698 0.3266577 0.5715398
plot_normal_normal(10,1.2,1.3,8.25,4)
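As a sanity check, the posterior parameters reported by summarize_normal_normal() can be reproduced directly from the Normal-Normal conjugacy formulas (a quick hand calculation; the variable names below are just local labels, not package functions):

```r
# Conjugate Normal-Normal update computed by hand:
# prior mu ~ N(10, 1.2^2), data Y_i | mu ~ N(mu, 1.3^2)
tau2   <- 1.2^2                          # prior variance
sigma2 <- 1.3^2                          # known data variance
n      <- 4
y_bar  <- mean(c(7.1, 8.9, 8.4, 8.6))    # sample mean = 8.25

# posterior mean is a precision-weighted blend of prior mean and y_bar
post_mean <- (10 * sigma2 + y_bar * n * tau2) / (sigma2 + n * tau2)
post_var  <- (tau2 * sigma2) / (sigma2 + n * tau2)
post_mean  # 8.64698
post_var   # 0.3266577
```

These match the summarize_normal_normal() output above.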

Comparing the two plots above, we can see the MCMC approximation is very similar to the true posterior model (both indicate that μ values around 8.65 are the most plausible, with the bulk of plausible μ values falling roughly between 7 and 10.5).

Exercise 6.18

nn_model_2 <- "
data {
    vector[5] Y;
} 
parameters {
    real mu;
} 
model {
   Y ~ normal(mu, 8);
   mu ~ normal(-14, 2);
}
"
nn_sim_2 <- stan(model_code = nn_model_2, data = list(Y = c(-10.1,5.5,0.1,-1.4,11.5)), 
               chains = 4, iter = 10000, seed = 606)
## 
## SAMPLING FOR MODEL '616b3aee1763dc9e8d8efa42ba602c9b' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 10000 [  0%]  (Warmup)
## Chain 1: Iteration: 1000 / 10000 [ 10%]  (Warmup)
## Chain 1: Iteration: 2000 / 10000 [ 20%]  (Warmup)
## Chain 1: Iteration: 3000 / 10000 [ 30%]  (Warmup)
## Chain 1: Iteration: 4000 / 10000 [ 40%]  (Warmup)
## Chain 1: Iteration: 5000 / 10000 [ 50%]  (Warmup)
## Chain 1: Iteration: 5001 / 10000 [ 50%]  (Sampling)
## Chain 1: Iteration: 6000 / 10000 [ 60%]  (Sampling)
## Chain 1: Iteration: 7000 / 10000 [ 70%]  (Sampling)
## Chain 1: Iteration: 8000 / 10000 [ 80%]  (Sampling)
## Chain 1: Iteration: 9000 / 10000 [ 90%]  (Sampling)
## Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.031338 seconds (Warm-up)
## Chain 1:                0.03126 seconds (Sampling)
## Chain 1:                0.062598 seconds (Total)
## Chain 1: 
## (Chains 2–4 produced analogous sampling logs, each completing 10000 iterations in roughly 0.065 seconds total.)
mcmc_trace(nn_sim_2, pars = "mu", size = 0.1)

mcmc_dens_overlay(nn_sim_2, pars = "mu") + 
  yaxis_text(TRUE) +
  ylab("density")

(c) From the density plots, a value of around -10.4 appears to be the most plausible posterior value of μ.

  1. The Normal-Normal Bayesian model complements the Normal data structure with a Normal prior on μ. Since the prior model is μ ∼ N(-14, 2²) with Y(i) | μ ∼ N(μ, 8²) and we have n = 5 independent observations (Y1, Y2, Y3, Y4, Y5) = (-10.1, 5.5, 0.1, -1.4, 11.5), the posterior model is N(-10.4, 1.745743²).
summarize_normal_normal(mean = -14, sd = 2, sigma = 8, y_bar = 1.12, n = 5)
##       model  mean  mode      var       sd
## 1     prior -14.0 -14.0 4.000000 2.000000
## 2 posterior -10.4 -10.4 3.047619 1.745743
plot_normal_normal(mean = -14, sd = 2, sigma = 8, y_bar = 1.12, n = 5)
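The same hand check with the conjugacy formulas applies here (again, the variable names are just local labels):

```r
# Conjugate Normal-Normal update computed by hand:
# prior mu ~ N(-14, 2^2), data Y_i | mu ~ N(mu, 8^2)
tau2   <- 2^2                                    # prior variance
sigma2 <- 8^2                                    # known data variance
n      <- 5
y_bar  <- mean(c(-10.1, 5.5, 0.1, -1.4, 11.5))   # sample mean = 1.12

post_mean <- (-14 * sigma2 + y_bar * n * tau2) / (sigma2 + n * tau2)
post_var  <- (tau2 * sigma2) / (sigma2 + n * tau2)
post_mean  # -10.4
post_var   # 3.047619
```

Because the data variance (8²) is large relative to the prior variance (2²), the posterior mean stays close to the prior mean of -14 despite the sample mean being 1.12.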

Comparing the two plots above, we can see the MCMC approximation is very similar to the true posterior model (both indicate that μ values around -10.4 are the most plausible, with the bulk of plausible μ values falling roughly between -16 and -5).